Install necessary libraries
#!pip install --upgrade --trusted-host pypi.org --trusted-host files.pythonhosted.org azure-cognitiveservices-vision-computervision
Collecting azure-cognitiveservices-vision-computervision
  Downloading azure_cognitiveservices_vision_computervision-0.7.0-py2.py3-none-any.whl (35 kB)
  ...
Successfully installed azure-cognitiveservices-vision-computervision-0.7.0 azure-common-1.1.26 isodate-0.6.0 msrest-0.6.19 oauthlib-3.1.0 requests-oauthlib-1.3.0
#!pip install --upgrade --trusted-host pypi.org --trusted-host files.pythonhosted.org pillow
Collecting pillow
Using cached Pillow-8.1.0-cp38-cp38-win_amd64.whl (2.2 MB)
Installing collected packages: pillow
Attempting uninstall: pillow
Found existing installation: Pillow 8.1.0
Uninstalling Pillow-8.1.0:
Successfully uninstalled Pillow-8.1.0
Successfully installed pillow-8.1.0
Import necessary libraries
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials
from array import array
import os
from PIL import Image
import sys
import time
import requests
import io
import IPython
Define subscription endpoint and key
subscription_key = "PLEASE_ENTER_YOUR_OWN_KEY"
endpoint = "https://PLEASE_ENTER_YOUR_OWN_ENDPOINT_NAME.cognitiveservices.azure.com/"
Define the authentication client
computervision_client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(subscription_key))
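Rather than hard-coding the key and endpoint in the notebook, they can be pulled from environment variables. The sketch below is a hypothetical helper (the variable names `AZURE_CV_KEY` and `AZURE_CV_ENDPOINT` are assumptions, not part of the SDK):

```python
import os

# Hypothetical helper: read the Computer Vision key and endpoint from
# environment variables so secrets stay out of the notebook source.
# AZURE_CV_KEY / AZURE_CV_ENDPOINT are illustrative names, not SDK conventions.
def load_cv_credentials():
    key = os.environ.get("AZURE_CV_KEY", "")
    endpoint = os.environ.get("AZURE_CV_ENDPOINT", "")
    if not key or not endpoint:
        raise ValueError("Set AZURE_CV_KEY and AZURE_CV_ENDPOINT first.")
    return key, endpoint
```

The returned pair can then be passed to `ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))` exactly as above.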
Define the image URL
remote_image_url = "https://upload.wikimedia.org/wikipedia/commons/b/b1/AIA_Tower_2013.jpg"
IPython.display.Image(remote_image_url, width = 250)
Get image description
'''
Describe an Image - remote
This example describes the contents of an image, with a confidence score.
'''
print("===== Describe an image - remote =====")
# Call API
description_results = computervision_client.describe_image(remote_image_url)
# Get the captions (descriptions) from the response, with confidence level
print("Description of remote image: ")
if len(description_results.captions) == 0:
    print("No description detected.")
else:
    for caption in description_results.captions:
        print("'{}' with confidence {:.2f}%".format(caption.text, caption.confidence * 100))
===== Describe an image - remote =====
Description of remote image: 
'a city with tall buildings' with confidence 36.78%
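Remote calls like `describe_image` can fail transiently (network errors, service throttling). A hedged, generic retry wrapper is sketched below; it is not part of the SDK, and it catches a caller-supplied exception class rather than naming specific SDK exception types:

```python
import time

# Hypothetical helper: retry a flaky call with exponential backoff.
# `exc` is the exception class to treat as transient; everything else
# propagates immediately. Not an SDK feature - just a plain sketch.
def call_with_retry(fn, *args, retries=3, delay=1.0, exc=Exception, **kwargs):
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except exc:
            if attempt == retries - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(delay * (2 ** attempt))  # back off: delay, 2*delay, ...
```

Usage would look like `call_with_retry(computervision_client.describe_image, remote_image_url)`.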
Get image category
'''
Categorize an Image - remote
This example extracts (general) categories from a remote image with a confidence score.
'''
print("===== Categorize an image - remote =====")
# Select the visual feature(s) you want.
remote_image_features = ["categories"]
# Call API with URL and features
categorize_results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features)
# Print results with confidence score
print("Categories from remote image: ")
if len(categorize_results_remote.categories) == 0:
    print("No categories detected.")
else:
    for category in categorize_results_remote.categories:
        print("'{}' with confidence {:.2f}%".format(category.name, category.score * 100))
===== Categorize an image - remote =====
Categories from remote image: 
'building_street' with confidence 69.92%
'outdoor_' with confidence 0.39%
Get image tags
'''
Tag an Image - remote
This example returns a tag (keyword) for each object or concept detected in the image.
'''
print("===== Tag an image - remote =====")
# Call API with remote image
tags_result_remote = computervision_client.tag_image(remote_image_url)
# Print results with confidence score
print("Tags in the remote image: ")
if len(tags_result_remote.tags) == 0:
    print("No tags detected.")
else:
    for tag in tags_result_remote.tags:
        print("'{}' with confidence {:.2f}%".format(tag.name, tag.confidence * 100))
===== Tag an image - remote =====
Tags in the remote image: 
'outdoor' with confidence 97.47%
'tower' with confidence 95.82%
'downtown' with confidence 85.99%
'building' with confidence 80.83%
'skyline' with confidence 72.98%
'cityscape' with confidence 72.61%
'city' with confidence 66.68%
'high rise' with confidence 64.26%
'tower block' with confidence 56.61%
'metropolitan area' with confidence 51.59%
'skyscraper' with confidence 26.56%
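In practice you often want only the high-confidence tags from a list like the one above. The sketch below filters and sorts them; `tags` mimics the SDK's `ImageTag` objects (each exposing `.name` and `.confidence`), so it works on the real response but can be tested with stand-in objects:

```python
# Hedged sketch: keep only tags at or above a confidence threshold,
# sorted from most to least confident. Works on anything with
# .name and .confidence attributes (like the SDK's ImageTag).
def strong_tags(tags, threshold=0.7):
    kept = [(t.name, t.confidence) for t in tags if t.confidence >= threshold]
    return sorted(kept, key=lambda nc: nc[1], reverse=True)
```

For the output above, `strong_tags(tags_result_remote.tags)` would drop entries such as 'skyscraper' (26.56%) while keeping 'outdoor' and 'tower'.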
Detect objects
'''
Detect Objects - remote
This example detects different kinds of objects with bounding boxes in a remote image.
'''
print("===== Detect Objects - remote =====")
# Get a URL of an image with several objects
remote_image_url_objects = "https://hkow.hk/wp-content/uploads/2018/09/HKOW-Pano-1.jpg"
IPython.display.Image(remote_image_url_objects, width=250)
# Call API with URL
detect_objects_results_remote = computervision_client.detect_objects(remote_image_url_objects)
# Print detected objects results with bounding boxes
print("Detecting objects in remote image:")
if len(detect_objects_results_remote.objects) == 0:
    print("No objects detected.")
else:
    for obj in detect_objects_results_remote.objects:  # `obj`, to avoid shadowing the built-in `object`
        print("object at location {}, {}, {}, {}".format(
            obj.rectangle.x, obj.rectangle.x + obj.rectangle.w,
            obj.rectangle.y, obj.rectangle.y + obj.rectangle.h))
===== Detect Objects - remote =====
Detecting objects in remote image:
object at location 5, 57, 484, 652
object at location 414, 505, 500, 658
object at location 1180, 1299, 307, 691
object at location 156, 300, 483, 687
object at location 744, 1215, 41, 707
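Note the print statement above emits the coordinates as left, right, top, bottom. Many drawing libraries (e.g. PIL's `ImageDraw.rectangle`) instead expect (left, top, right, bottom) corners. A small conversion sketch, assuming only that `rect` mimics the SDK's rectangle with `.x`, `.y`, `.w`, `.h` attributes:

```python
# Hedged sketch: convert an (x, y, w, h) rectangle into the conventional
# (left, top, right, bottom) corner tuple used by most drawing APIs.
def rect_corners(rect):
    return (rect.x, rect.y, rect.x + rect.w, rect.y + rect.h)
```

With the first detected object above (x=5, y=484, w=52, h=168) this yields (5, 484, 57, 652).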
Detect brands
'''
Detect Brands - remote
This example detects common brands like logos and puts a bounding box around them.
'''
print("===== Detect Brands - remote =====")
# Get a URL of an image with a brand logo
remote_image_url = "https://www.reviewjournal.com/wp-content/uploads/2019/12/13098467_web1_111.jpg"
IPython.display.Image(remote_image_url, width=250)
# Select the visual feature(s) you want
remote_image_features = ["brands"]
# Call API with URL and features
detect_brands_results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features)
print("Detecting brands in remote image: ")
if len(detect_brands_results_remote.brands) == 0:
    print("No brands detected.")
else:
    for brand in detect_brands_results_remote.brands:
        print("'{}' brand detected with confidence {:.1f}% at location {}, {}, {}, {}".format(
            brand.name, brand.confidence * 100, brand.rectangle.x, brand.rectangle.x + brand.rectangle.w,
            brand.rectangle.y, brand.rectangle.y + brand.rectangle.h))
===== Detect Brands - remote =====
Detecting brands in remote image: 
'Apple' brand detected with confidence 83.6% at location 890, 935, 347, 413
Detect faces
'''
Detect Faces - remote
This example detects faces in a remote image, gets their gender and age,
and marks them with a bounding box.
'''
print("===== Detect Faces - remote =====")
# Get an image with faces
remote_image_url_faces = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/faces.jpg"
IPython.display.Image(remote_image_url_faces, width=250)
# Select the visual feature(s) you want.
remote_image_features = ["faces"]
# Call the API with remote URL and features
detect_faces_results_remote = computervision_client.analyze_image(remote_image_url_faces, remote_image_features)
# Print the results with gender, age, and bounding box
print("Faces in the remote image: ")
if len(detect_faces_results_remote.faces) == 0:
    print("No faces detected.")
else:
    for face in detect_faces_results_remote.faces:
        print("'{}' of age {} at location {}, {}, {}, {}".format(
            face.gender, face.age,
            face.face_rectangle.left, face.face_rectangle.top,
            face.face_rectangle.left + face.face_rectangle.width,
            face.face_rectangle.top + face.face_rectangle.height))
===== Detect Faces - remote =====
Faces in the remote image: 
'Male' of age 39 at location 118, 159, 212, 253
'Male' of age 54 at location 492, 111, 582, 201
'Female' of age 55 at location 18, 153, 102, 237
'Female' of age 33 at location 386, 166, 467, 247
'Female' of age 18 at location 235, 158, 311, 234
'Female' of age 8 at location 323, 163, 391, 231
Detect adult, racy, or gory content
'''
Detect Adult or Racy Content - remote
This example detects adult or racy content in a remote image, then prints the adult/racy score.
The score ranges from 0.0 to 1.0; lower scores indicate the content is less likely to be adult or racy.
'''
print("===== Detect Adult or Racy Content - remote =====")
# Select the visual feature(s) you want
remote_image_features = ["adult"]
# Call API with URL and features
detect_adult_results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features)
# Print results with adult/racy score
print("Analyzing remote image for adult or racy content ... ")
print("Is adult content: {} with confidence {:.2f}".format(detect_adult_results_remote.adult.is_adult_content, detect_adult_results_remote.adult.adult_score * 100))
print("Has racy content: {} with confidence {:.2f}".format(detect_adult_results_remote.adult.is_racy_content, detect_adult_results_remote.adult.racy_score * 100))
===== Detect Adult or Racy Content - remote =====
Analyzing remote image for adult or racy content ... 
Is adult content: False with confidence 0.06
Has racy content: False with confidence 0.07
Get image color scheme
'''
Detect Color - remote
This example detects different aspects of the color scheme of a remote image.
'''
print("===== Detect Color - remote =====")
# Select the feature(s) you want
remote_image_features = ["color"]
# Call API with URL and features
detect_color_results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features)
# Print results of color scheme
print("Getting color scheme of the remote image: ")
print("Is black and white: {}".format(detect_color_results_remote.color.is_bw_img))
print("Accent color: {}".format(detect_color_results_remote.color.accent_color))
print("Dominant background color: {}".format(detect_color_results_remote.color.dominant_color_background))
print("Dominant foreground color: {}".format(detect_color_results_remote.color.dominant_color_foreground))
print("Dominant colors: {}".format(detect_color_results_remote.color.dominant_colors))
===== Detect Color - remote =====
Getting color scheme of the remote image: 
Is black and white: False
Accent color: 8A6641
Dominant background color: Grey
Dominant foreground color: Grey
Dominant colors: ['Grey']
Get domain-specific content
'''
Detect Domain-specific Content - remote
This example detects celebrities and landmarks in remote images.
'''
print("===== Detect Domain-specific Content - remote =====")
# URL of an image with one or more celebrities
remote_image_url_celebs = "https://3er1viui9wo30pkxh1v2nh4w-wpengine.netdna-ssl.com/wp-content/uploads/prod/2019/01/Microsoft-Founded-1-768x512.jpg"
IPython.display.Image(remote_image_url_celebs, width=250)
# Call API with content type (celebrities) and URL
detect_domain_results_celebs_remote = computervision_client.analyze_image_by_domain("celebrities", remote_image_url_celebs)
# Print detection results with name
print("Celebrities in the remote image:")
if len(detect_domain_results_celebs_remote.result["celebrities"]) == 0:
    print("No celebrities detected.")
else:
    for celeb in detect_domain_results_celebs_remote.result["celebrities"]:
        print(celeb["name"])
===== Detect Domain-specific Content - remote =====
Celebrities in the remote image:
Bill Gates
Jim Lane
Bob Wallace
# Get an image of a landmark
remote_image_url_land = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/landmark.jpg"
IPython.display.Image(remote_image_url_land, width=250)
# Call API with content type (landmarks) and URL
detect_domain_results_landmarks = computervision_client.analyze_image_by_domain("landmarks", remote_image_url_land)
print()
print("Landmarks in the remote image:")
if len(detect_domain_results_landmarks.result["landmarks"]) == 0:
    print("No landmarks detected.")
else:
    for landmark in detect_domain_results_landmarks.result["landmarks"]:
        print(landmark["name"])
Landmarks in the remote image:
Colosseum
Read printed and handwritten text
# Call the Read API
'''
Batch Read File, recognize printed text - remote
This example extracts printed text from an image, then prints the results line by line.
This API call can also recognize handwriting (not shown).
'''
print("===== Batch Read File - remote =====")
# Get an image with printed text
remote_image_handw_text_url = "https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/cognitive-services/Computer-vision/Images/readsample.jpg"
IPython.display.Image(remote_image_handw_text_url, width=250)
# Call API with URL and raw response (allows you to get the operation location)
recognize_handw_results = computervision_client.read(remote_image_handw_text_url, raw=True)
===== Batch Read File - remote =====
# Get Read results
# Get the operation location (URL with an ID at the end) from the response
operation_location_remote = recognize_handw_results.headers["Operation-Location"]
# Grab the ID from the URL
operation_id = operation_location_remote.split("/")[-1]
# Call the "GET" API and wait for it to retrieve the results
while True:
    get_handw_text_results = computervision_client.get_read_result(operation_id)
    if get_handw_text_results.status not in ['notStarted', 'running']:
        break
    time.sleep(1)
# Print the detected text, line by line
if get_handw_text_results.status == OperationStatusCodes.succeeded:
    for text_result in get_handw_text_results.analyze_result.read_results:
        for line in text_result.lines:
            print(line.text)
            print(line.bounding_box)
            print()
The quick brown fox jumps
[38.0, 650.0, 2572.0, 699.0, 2570.0, 854.0, 37.0, 815.0]

over
[184.0, 1053.0, 508.0, 1044.0, 510.0, 1123.0, 184.0, 1128.0]

the lazy dog!
[639.0, 1011.0, 1976.0, 1026.0, 1974.0, 1158.0, 637.0, 1141.0]
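The polling loop above spins until the operation leaves the 'notStarted'/'running' states, with no upper bound on waiting. A hedged sketch of the same loop with a timeout is below; `client` stands for anything exposing `get_read_result(operation_id)` that returns an object with a `.status` attribute, matching how the loop above uses the SDK client:

```python
import time

# Hedged sketch: poll a Read operation until it finishes or a timeout expires.
# Mirrors the while-loop above, but raises TimeoutError instead of looping forever.
def wait_for_read_result(client, operation_id, timeout=30.0, interval=1.0):
    deadline = time.monotonic() + timeout
    while True:
        result = client.get_read_result(operation_id)
        if result.status not in ("notStarted", "running"):
            return result  # finished (succeeded or failed)
        if time.monotonic() >= deadline:
            raise TimeoutError("Read operation did not finish in time")
        time.sleep(interval)
```

Usage: `get_handw_text_results = wait_for_read_result(computervision_client, operation_id)`, followed by the same status check against `OperationStatusCodes.succeeded`.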
# Call the Read API
'''
Batch Read File, recognize text in a scanned letter - remote
This example extracts the text in a scanned letter, then prints the results line by line.
'''
print("===== Batch Read File - remote =====")
# Get an image of a scanned letter
remote_image_handw_text_url_2 = "https://www.aimovement.org/csi/Churchill/churchill_belle_11_24_93_05.jpg"
IPython.display.Image(remote_image_handw_text_url_2, width=250)
# Call API with URL and raw response (allows you to get the operation location)
recognize_handw_results = computervision_client.read(remote_image_handw_text_url_2, raw=True)
===== Batch Read File - remote =====
# Get Read results
# Get the operation location (URL with an ID at the end) from the response
operation_location_remote = recognize_handw_results.headers["Operation-Location"]
# Grab the ID from the URL
operation_id = operation_location_remote.split("/")[-1]
# Call the "GET" API and wait for it to retrieve the results
while True:
    get_handw_text_results = computervision_client.get_read_result(operation_id)
    if get_handw_text_results.status not in ['notStarted', 'running']:
        break
    time.sleep(1)
# Print the detected text, line by line
if get_handw_text_results.status == OperationStatusCodes.succeeded:
    for text_result in get_handw_text_results.analyze_result.read_results:
        for line in text_result.lines:
            print(line.text)
            print(line.bounding_box)
            print()
In your May 1, 1986 letter, you acknowledge receipt of an invitation, [116.0, 72.0, 546.0, 69.0, 546.0, 83.0, 116.0, 86.0] and you agreed to attend the 11th Annual IITC Conference held at Big [81.0, 87.0, 517.0, 85.0, 517.0, 99.0, 81.0, 102.0] Mountain Dineh Nation. It was clear that both of you knew that you would [83.0, 102.0, 542.0, 100.0, 542.0, 113.0, 83.0, 115.0] be confronted due to your devisive actions. These actions included a trip [82.0, 117.0, 531.0, 115.0, 531.0, 128.0, 82.0, 131.0] by one or both of you to Big Mountain during this period of time in order [82.0, 132.0, 536.0, 130.0, 536.0, 143.0, 82.0, 146.0] to disrupt the unity between the people of Big Mountain and AIM and IITC [82.0, 147.0, 549.0, 143.0, 549.0, 158.0, 82.0, 162.0] by your promotion of "new AIM" vs. "old AIM." This is your language. (See [83.0, 162.0, 542.0, 160.0, 543.0, 174.0, 83.0, 177.0] attached an open letter to all American Indian Movement coordinators from Big [82.0, 176.0, 517.0, 174.0, 517.0, 186.0, 82.0, 189.0] Mountain/Hopi partitioned lands). [81.0, 188.0, 274.0, 187.0, 274.0, 200.0, 81.0, 201.0] Your actions also included a demonstration against the solidarity [120.0, 212.0, 519.0, 211.0, 519.0, 225.0, 120.0, 227.0] groups in Denver who had organized a forum on the struggle at Big [80.0, 228.0, 498.0, 226.0, 498.0, 240.0, 81.0, 243.0] Mountain and the Indigenous people's struggles in Central America at St. [84.0, 243.0, 533.0, 240.0, 533.0, 254.0, 84.0, 257.0] Cajetans Church in Denver. When Bill Means and Vernon Bellecourt arrived [84.0, 258.0, 547.0, 255.0, 547.0, 268.0, 84.0, 272.0] they were at first happy to see what they thought were American Indian [83.0, 273.0, 536.0, 270.0, 536.0, 284.0, 83.0, 287.0] Movement people, even though with the exception of two or three Indian [84.0, 288.0, 536.0, 286.0, 536.0, 299.0, 84.0, 302.0] people, most were a rag tag group of non-Indians. 
It then dawned on Bill [83.0, 304.0, 536.0, 300.0, 536.0, 313.0, 83.0, 318.0] and Vernon that these people were, in fact, picketing against the event. [83.0, 318.0, 525.0, 316.0, 525.0, 330.0, 83.0, 332.0] When confronted and asked to come into the event, thinking that they [84.0, 333.0, 518.0, 331.0, 518.0, 345.0, 84.0, 347.0] might realize their folly, and asked also who put them up to this action, [84.0, 348.0, 525.0, 346.0, 525.0, 360.0, 84.0, 362.0] they stated -- Ward Churchill and Glenn Morris put them up to it. When [84.0, 363.0, 521.0, 361.0, 521.0, 374.0, 84.0, 377.0] asked where you both were, the response was that you were in South [83.0, 378.0, 515.0, 376.0, 515.0, 390.0, 83.0, 392.0] Dakota. Upon investigation and from your own admittance, we found out [84.0, 393.0, 533.0, 391.0, 533.0, 404.0, 84.0, 407.0] that you were actually in South Dakota manipulating some the leadership [84.0, 408.0, 538.0, 405.0, 538.0, 419.0, 84.0, 422.0] of the movement. [83.0, 423.0, 198.0, 423.0, 198.0, 436.0, 83.0, 436.0] Upon entering the event at St. Cajetans Church, some of your [120.0, 452.0, 496.0, 451.0, 496.0, 465.0, 120.0, 467.0] Anglo/Caucasian goon-types tried to disrupt the event by verbally and [84.0, 469.0, 524.0, 466.0, 524.0, 480.0, 84.0, 483.0] physically attacking some of the organizational leaders of the event at [83.0, 484.0, 519.0, 481.0, 519.0, 495.0, 83.0, 499.0] which time the goons were expelled from the church. When the event [85.0, 499.0, 516.0, 496.0, 516.0, 510.0, 85.0, 513.0] organizers were asked, "Where were the people of Big Mountain?" the [84.0, 514.0, 516.0, 510.0, 516.0, 524.0, 84.0, 528.0] answer from one of the demonstrators, who by now knew they were being [84.0, 528.0, 548.0, 527.0, 548.0, 540.0, 84.0, 542.0] used, was ,"Ward Churchill went to Big Mountain in order to disrupt the [85.0, 543.0, 527.0, 541.0, 527.0, 555.0, 85.0, 557.0] event." 
(Note: Vemnon Bellecourt is one of the founders of the Denver Chapter of [84.0, 559.0, 526.0, 557.0, 526.0, 569.0, 84.0, 572.0] AIM. see attached letter dated February 27. 1986 by Morris and Churchill along with [84.0, 572.0, 539.0, 570.0, 539.0, 582.0, 85.0, 585.0] the article, "Hot Coals.") [84.0, 585.0, 216.0, 584.0, 216.0, 596.0, 84.0, 597.0] Bill Means stated in his September 23, 1986 letter to both of you [121.0, 609.0, 522.0, 608.0, 522.0, 621.0, 121.0, 623.0] terminating your association with IITC, "your activities became a great [85.0, 625.0, 522.0, 623.0, 523.0, 636.0, 85.0, 639.0] disappointment, and it is unfortunate that you allowed ambition and other [84.0, 641.0, 547.0, 638.0, 547.0, 651.0, 84.0, 654.0] pressures to put an end to your affiliation with us." Your response to this [84.0, 656.0, 536.0, 653.0, 536.0, 666.0, 85.0, 669.0] termination was one or both of you to drafted a letter on behalf of the [85.0, 671.0, 519.0, 667.0, 519.0, 681.0, 85.0, 684.0] Colorado American Indian Movement to William Means and William [85.0, 684.0, 508.0, 682.0, 508.0, 696.0, 85.0, 698.0] Wahpepah dated November 1, 1986, which was signed by several honest, [86.0, 700.0, 541.0, 698.0, 541.0, 712.0, 86.0, 714.0] 5 [295.0, 742.0, 305.0, 742.0, 305.0, 753.0, 295.0, 753.0]